Multi-Recursive Constraint Demotion

Author

  • Bruce B. Tesar
Abstract

A significant source of difficulty in language learning is the presumed "incompleteness" of the overt information available to a language learner, termed here an 'overt form', when they hear an utterance. The complete structural description assigned to an utterance by linguistic analysis includes representational elements not directly apparent in the overt form, but which play a critical role in linguistic theory. Because the central principles of linguistic theory, including those determining the space of possible human grammars, make reference to these elements of 'hidden structure', recovering them is necessary if the overt data are to be brought to bear on the task of determining the correct grammar. Hidden structure, although not directly perceivable, need not be a great difficulty if it can easily be reconstructed from the overt form. Hidden structure becomes a problem when the overt form is ambiguous: if a given overt form is consistent with two or more different full structural descriptions, then the correct structural description cannot be determined from the information in that overt form alone. Presumably, other data, from other overt forms, are necessary to determine the correct structural description.

In Optimality Theory (Prince and Smolensky, 1993), learning a grammar means finding a correct ranking for the universal constraints. The learner, given a collection of overt forms (presumed to be the overt reflexes of grammatical utterances), must arrive at a ranking of the constraints such that, for each overt form, there is a matching structural description which is optimal for some input under that ranking. Tesar and Smolensky (to appear) have demonstrated that, given the correct full structural descriptions, a constraint ranking can be determined efficiently which makes all of those structural descriptions optimal. Thus, if the problem of hidden structure can be overcome, constraint rankings can be learned.

Recent work by Tesar on language learning in Optimality Theory has used an iterative strategy to approach the problem of determining hidden structure (Tesar, to appear; Tesar, 1997). The strategy processes overt forms serially, one at a time. One notable property of that work is that, when the processing of an overt form is complete, the procedure retains as information only a single hypothesized constraint hierarchy. Upon receipt of an overt form, the algorithm modifies its hypothesized constraint ranking as necessary to accommodate the overt form, but then retains only the resulting constraint hierarchy when moving on to the next overt form. This behavior is standard practice in the "language learnability in the limit" framework (Gold, 1967). One motivation for this type of limitation is to avoid learning procedures which remember an unbounded number of utterances. However, limiting the...
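The ranking step referred to above, Tesar and Smolensky's Recursive Constraint Demotion, can be illustrated with a short sketch. This is a minimal illustration only, assuming each datum is a winner-loser pair of violation profiles (dictionaries mapping constraint names to violation counts); the function name stratify and this data encoding are assumptions for exposition, not notation from the paper.

    # A minimal sketch of batch Recursive Constraint Demotion, assuming
    # winner-loser pairs encoded as violation-count dictionaries.
    def stratify(constraints, pairs):
        """Build a stratified hierarchy (list of strata, highest first)
        under which every winner beats its paired loser; raise ValueError
        if no consistent ranking exists."""
        unranked = set(constraints)
        remaining = list(pairs)            # (winner_marks, loser_marks) pairs
        hierarchy = []
        while remaining:
            # A constraint may be ranked now only if it prefers no loser
            # in any still-unexplained pair.
            stratum = {c for c in unranked
                       if all(l[c] >= w[c] for w, l in remaining)}
            if not stratum:
                raise ValueError("data are inconsistent: no ranking exists")
            hierarchy.append(stratum)
            unranked -= stratum
            # A pair is explained once some constraint ranked so far
            # strictly prefers its winner.
            remaining = [(w, l) for w, l in remaining
                         if not any(l[c] > w[c] for c in stratum)]
        if unranked:
            hierarchy.append(unranked)     # leftovers go to the bottom
        return hierarchy

For example, with constraints ['Parse', 'Fill'] and a single pair in which the winner violates Fill once and the loser violates Parse once, stratify returns [{'Parse'}, {'Fill'}], i.e. Parse dominates Fill.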


Similar Resources

Convergence of Error-driven Ranking Algorithms

According to the OT error-driven ranking model of language acquisition, the learner performs a sequence of slight re-rankings triggered by mistakes on the incoming stream of data, until it converges to a ranking that makes no more mistakes. This learning model is very popular in the OT acquisition literature, in particular because it predicts a sequence of rankings that models gradualness in ch...
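The kind of single mistake-triggered re-ranking step described here can be sketched roughly as follows, in the spirit of Error-Driven Constraint Demotion. The encoding of the hierarchy as a list of sets, the assumption that at least one constraint prefers the observed winner, and the name demote_on_error are all illustrative choices, not details from the paper.

    # A rough sketch of one error-driven demotion step: when the learner's
    # current ranking wrongly prefers `loser` over the observed `winner`,
    # demote the offending loser-preferring constraints to just below the
    # highest-ranked winner-preferring constraint.
    def demote_on_error(hierarchy, winner, loser):
        # Highest stratum containing a constraint that prefers the winner
        # (assumes such a constraint exists).
        top = next(i for i, s in enumerate(hierarchy)
                   if any(loser[c] > winner[c] for c in s))
        offenders = {c for s in hierarchy[:top + 1] for c in s
                     if winner[c] > loser[c]}
        new_h = [s - offenders for s in hierarchy]
        if top + 1 < len(new_h):
            new_h[top + 1] |= offenders
        else:
            new_h.append(offenders)
        return [s for s in new_h if s]     # drop emptied strata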


Counting Rankings

In this paper, I present a recursive algorithm that calculates the number of rankings that are consistent with a set of data (i.e. optimal candidates) in the framework of Optimality Theory. The ability to compute this quantity, which I call the r-volume, makes possible a simple and effective Bayesian heuristic in learning – all else equal, choose the candidate preferred by the highest number of...
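The quantity described here can be made concrete with a brute-force sketch: count the total orders of the constraint set under which every observed winner beats its paired loser. The paper's own algorithm is recursive and far more efficient; the exhaustive enumeration below is purely illustrative, and the winner-loser dictionary encoding is an assumption.

    # A brute-force illustration of the r-volume: the number of total
    # rankings consistent with a set of winner-loser pairs.
    from itertools import permutations

    def r_volume(constraints, pairs):
        def beats(order, w, l):
            # The highest-ranked constraint distinguishing the two
            # candidates must prefer the winner.
            for c in order:
                if w[c] != l[c]:
                    return w[c] < l[c]
            return False                   # identical profiles: no win
        return sum(all(beats(order, w, l) for w, l in pairs)
                   for order in permutations(constraints))

On the two-constraint example above, r_volume(['Parse', 'Fill'], pairs) returns 1, since only the ranking Parse >> Fill makes the winner optimal.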


Some Correct Error-driven Versions of the Constraint Demotion Algorithm

This paper shows that Error-Driven Constraint Demotion (EDCD), an error-driven learning algorithm proposed by Tesar (1995) for Prince and Smolensky's (1993) version of Optimality Theory, can fail to converge to a totally ranked hierarchy of constraints, unlike the earlier non-error-driven learning algorithms proposed by Tesar and Smolensky (1993). The cause of the problem is found in Tesar's use...


Tools for the robust analysis of error-driven ranking algorithms and their implications for modelling the child's acquisition of phonotactics

Error-driven ranking algorithms (EDRAs) perform a sequence of slight re-rankings of the constraint set triggered by mistakes on the incoming stream of data. In general, the sequence of rankings entertained by the algorithm, and in particular the final ranking entertained at convergence, depend not only on the grammar the algorithm is trained on, but also on the specific way data are sampled fro...


Using Disjunctive Orderings Instead of Conflict Resolution in Partial Order Planning

Resolving conflicts in partial order planning by means of promotion and demotion has been a standard idea in least commitment planning for over twenty years. Recent analyses of partial order planning show that promotion/demotion refinements are optional from the point of view of completeness. Furthermore, promotion/demotion refinements run counter to the idea of least commitment, since the plan...
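For readers outside the planning literature, the promotion/demotion move mentioned here can be sketched in a few lines: a step that threatens a causal link from a producer to a consumer is ordered either entirely before the producer (demotion) or entirely after the consumer (promotion). The function name and the encoding of orderings as a set of before/after pairs are illustrative assumptions.

    # A toy illustration of conflict resolution in partial-order planning:
    # protect a causal link (producer -> consumer) from a threatening step
    # by committing to one of two orderings.
    def resolve_threat(orderings, threat, producer, consumer):
        """Return the two alternative ordering sets: demotion places the
        threat before the producer; promotion places it after the consumer."""
        demotion = orderings | {(threat, producer)}    # threat < producer
        promotion = orderings | {(consumer, threat)}   # consumer < threat
        return demotion, promotion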



Journal:

Volume   Issue

Pages  -

Publication date: 1997